
    ViotSOC: Controlling Access to Dynamically Virtualized IoT Services using Service Object Capability

    Virtualization of the Internet of Things (IoT) is a concept of dynamically building customized high-level IoT services that rely on real-time data streams from low-level physical IoT sensors. Security in IoT virtualization is challenging because, with the growing number of available (building-block) services, the number of personalizable virtual services grows exponentially. This paper proposes the Service Object Capability (SOC) ticket system, a decentralized access control mechanism between servers and clients that lets them efficiently authenticate and authorize each other without using public key cryptography. SOC supports decentralized partial delegation of the capabilities specified in each server/client ticket. Unlike PKI certificates, SOC's authentication time and handshake packet overhead stay constant regardless of each capability's delegation hop distance from the root delegator. The paper compares SOC's security benefits with Kerberos, and the experimental results show that SOC's authentication incurs significantly less time and packet overhead than mechanisms based on RSA-PKI and ECC-PKI algorithms. SOC is as secure as, and more efficient and better suited to IoT environments than, existing PKIs and Kerberos.
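    The abstract does not reproduce SOC's ticket format. As an illustration only, the sketch below shows one way a symmetric-key capability ticket with partial delegation could be built without public-key cryptography; the field names, the HMAC-based key derivation, and the issue/delegate helpers are assumptions, not SOC's actual design.

```python
import hmac, hashlib, json

def _mac(key: bytes, payload: dict) -> str:
    """Authenticate a ticket payload with a symmetric key (no public-key crypto)."""
    data = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def issue(issuer_key: bytes, holder: str, caps: set[str]):
    """Issue a capability ticket and the holder's derived key (illustrative scheme)."""
    holder_key = hmac.new(issuer_key, holder.encode(), hashlib.sha256).digest()
    payload = {"holder": holder, "caps": sorted(caps)}
    return {"payload": payload, "tag": _mac(holder_key, payload)}, holder_key

def delegate(parent_ticket: dict, parent_key: bytes, new_holder: str, subset: set[str]):
    """Partially delegate a subset of the parent's capabilities to a new holder."""
    assert subset <= set(parent_ticket["payload"]["caps"]), "cannot amplify rights"
    return issue(parent_key, new_holder, subset)

# Example: a server holding {"read", "write"} delegates only "read" to a client.
root_key = b"\x00" * 32                              # pre-shared with the verifier
server_ticket, server_key = issue(root_key, "server-A", {"read", "write"})
client_ticket, _ = delegate(server_ticket, server_key, "client-7", {"read"})
print(client_ticket["payload"]["caps"])              # ['read']
```

    Because each holder key is derived from its delegator's key, a verifier holding the root key can recompute the chain without extra handshake material; this mirrors, but does not claim to reproduce, SOC's constant-overhead property.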

    Fog Network Task Scheduling for IoT Applications

    In Internet of Things (IoT) networks, data traffic can be very bursty and unpredictable. It is therefore very difficult to analyze and guarantee the delay performance of delay-sensitive IoT applications in fog networks, such as emergency monitoring, intelligent manufacturing, and autonomous driving. To address this challenging problem, a Bursty Elastic Task Scheduling (BETS) algorithm is developed to best accommodate bursty task arrivals and varied requirements in IoT networks, thus optimizing the service experience of delay-sensitive applications with only limited communication resources in time-varying and competitive environments. To better describe the stability and consistency of Quality of Service (QoS) in realistic scenarios, a new performance metric, the "Bursty Service Experience Index (BSEI)", is defined and quantified as delay jitter normalized by the average delay. Finally, the numerical results show that BETS achieves a 5-10 times lower BSEI than traditional task scheduling algorithms, e.g., Proportional Fair (PF) and Max Carrier-to-Interference ratio (MCI), under bursty traffic conditions. These results demonstrate that BETS can effectively smooth out the bursty characteristics of IoT networks and provide much more predictable and acceptable QoS for delay-sensitive applications.
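    The BSEI definition above can be instantiated directly. In the sketch below, "delay jitter" is taken to be the standard deviation of observed per-task delays, which is an assumption for illustration; the paper's exact jitter estimator may differ.

```python
import statistics

def bsei(delays_ms: list[float]) -> float:
    """Bursty Service Experience Index: delay jitter normalized by average delay.

    Here jitter is assumed to be the population standard deviation of the
    observed per-task delays.
    """
    return statistics.pstdev(delays_ms) / statistics.mean(delays_ms)

smooth_scheduler = [10, 11, 9, 10, 12, 10]     # consistent delays -> low BSEI
bursty_scheduler = [3, 40, 5, 55, 4, 38]       # erratic delays -> high BSEI
print(round(bsei(smooth_scheduler), 2))        # ~0.09
print(round(bsei(bursty_scheduler), 2))        # ~0.86 (worse service experience)
```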

    Realizing Video Analytic Service in the Fog-Based Infrastructure-Less Environments

    Deep learning has unleashed great potential in many fields and is now the most significant facilitator of video analytics, owing to its capability to provide more intelligent services in complex scenarios. Meanwhile, the emergence of fog computing has brought unprecedented opportunities to provision intelligent services in infrastructure-less environments such as remote national parks and rural farms. However, most deep learning algorithms are computationally intensive and cannot be executed in such environments because of the support they require from the cloud. In this paper, we develop a video analytic framework tailored particularly for fog devices to realize video analytic services in a rapid manner. Convolutional neural networks are used as the core processing unit of the framework to facilitate the image analysis process.
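    As a minimal sketch of the kind of on-device analysis described above, the snippet below runs a lightweight stand-in CNN (MobileNetV2) over frames from a local camera on a CPU-only fog node. The framework itself is not reproduced; the model choice, frame count, and preprocessing are illustrative assumptions.

```python
import cv2                      # pip install opencv-python
import torch
from torchvision import models, transforms

model = models.mobilenet_v2(weights="DEFAULT").eval()   # lightweight stand-in CNN
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)       # local camera attached to the fog device
with torch.no_grad():
    for _ in range(30):         # analyse a short burst of frames
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        scores = model(preprocess(rgb).unsqueeze(0))
        top_class = int(scores.argmax(dim=1))
        print("frame class id:", top_class)   # the decision stays at the edge
cap.release()
```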

    Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs

    Recent studies have exposed that many graph neural networks (GNNs) are sensitive to adversarial attacks and can suffer performance loss if the graph structure is intentionally perturbed. A different line of research has shown that many GNN architectures implicitly assume that the underlying graph displays homophily, i.e., that connected nodes are more likely to have similar features and class labels, and perform poorly if this assumption is not fulfilled. In this work, we formalize the relation between these two seemingly different issues. We theoretically show that in the standard scenario in which node features exhibit homophily, impactful structural attacks always lead to increased levels of heterophily. Then, inspired by GNN architectures that target heterophily, we present two designs -- (i) separate aggregators for ego- and neighbor-embeddings, and (ii) a reduced scope of aggregation -- that can significantly improve the robustness of GNNs. Our extensive empirical evaluations show that GNNs featuring merely these two designs achieve significantly improved robustness compared to the best-performing unvaccinated model, with a 24.99% gain in average performance under targeted attacks, while having smaller computational overhead than existing defense mechanisms. Furthermore, these designs can be readily combined with explicit defense mechanisms to yield state-of-the-art robustness, with up to an 18.33% increase in performance under attacks compared to the best-performing vaccinated model. Comment: preprint with appendix; 30 pages, 1 figure
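    The paper's architecture is not reproduced here. The minimal sketch below shows how a message-passing layer might realize the two designs: separate linear transforms for ego- and neighbor-embeddings, and aggregation limited to the 1-hop neighborhood. The layer name, dimensions, and mean normalization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SeparateAggLayer(nn.Module):
    """Illustrative GNN layer: (i) separate ego/neighbor transforms,
    (ii) aggregation restricted to the immediate 1-hop neighborhood."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.ego = nn.Linear(in_dim, out_dim)        # ego-embedding path
        self.nbr = nn.Linear(in_dim, out_dim)        # neighbor-embedding path

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Row-normalized 1-hop aggregation over an adjacency matrix without
        # self-loops, so ego and neighbor information are never mixed before
        # their separate linear transforms are applied.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        nbr_mean = (adj @ x) / deg
        return torch.relu(self.ego(x) + self.nbr(nbr_mean))

# Tiny example: 4 nodes, 8 features, adjacency without self-loops.
x = torch.randn(4, 8)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
layer = SeparateAggLayer(8, 16)
print(layer(x, adj).shape)    # torch.Size([4, 16])
```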

    Transcriptome analysis reveals the time of the fourth round of genome duplication in common carp (Cyprinus carpio)

    Background: Common carp (Cyprinus carpio) is thought to have undergone one extra round of genome duplication compared to zebrafish. Transcriptome analysis has been used to study the existence and timing of genome duplication in species for which genome sequences are incomplete. Large-scale transcriptome data for the common carp genome should help reveal the timing of the additional duplication event.
    Results: We have sequenced the transcriptome of common carp using 454 pyrosequencing. After assembling the 454 contigs and the published common carp sequences together, we obtained 49,669 contigs and identified genes using homology searches and an ab initio method. We identified 4,651 orthologous pairs between common carp and zebrafish and found 129,984 paralogous pairs within the common carp. An estimation of the synonymous substitution rate in the orthologous pairs indicated that common carp and zebrafish diverged 120 million years ago (MYA). We identified one round of genome duplication in common carp and estimated that it had occurred 5.6 to 11.3 MYA. In zebrafish, no genome duplication event after speciation was observed, suggesting that, compared to zebrafish, common carp had undergone an additional genome duplication event. We annotated the common carp contigs with Gene Ontology terms and KEGG pathways. Compared with zebrafish gene annotations, we found that a set of biological processes and pathways were enriched in common carp.
    Conclusions: The assembled contigs helped us to estimate the time of the fourth round of genome duplication in common carp. The resource that we have built as part of this study will help advance functional genomics and genome annotation studies in the future.
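    The Ks-based dating used above follows the standard relation T = Ks / (2r), where r is the per-lineage synonymous substitution rate. In the sketch below, the rate is an assumed, commonly cited order of magnitude for fish nuclear genes, not a value taken from the paper.

```python
def divergence_time_mya(ks: float, rate_per_site_per_year: float = 3.5e-9) -> float:
    """Date a duplication/divergence event from the synonymous substitution level Ks.

    T = Ks / (2 * r); the default rate is an illustrative assumption.
    """
    years = ks / (2 * rate_per_site_per_year)
    return years / 1e6

# A paralog pair with Ks ~ 0.05 dates to roughly 7 MYA under this assumed rate,
# the same order of magnitude as the 5.6-11.3 MYA window reported above.
print(round(divergence_time_mya(0.05), 1))   # ~7.1 MYA
```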

    Sapper: Subgraph indexing and approximate matching in large graphs

    With the emergence of new applications, e.g., computational biology, new software engineering techniques, and social networks, more and more data is in the form of graphs. Locating occurrences of a query graph in a large database graph is an important research topic. Due to the existence of noise (e.g., missing edges) in the large database graph, we investigate the problem of approximate subgraph indexing, i.e., finding the occurrences of a query graph in a large database graph with (possibly) missing edges. The SAPPER method is proposed to solve this problem. Utilizing hybrid neighborhood unit structures in the index, SAPPER takes advantage of pre-generated random spanning trees and a carefully designed graph enumeration order. Real and synthetic data sets are employed to demonstrate the efficiency and scalability of our approximate subgraph indexing method.
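    SAPPER's index structures are not reproduced here. The brute-force sketch below only illustrates the problem definition: finding injective mappings of a query graph into a database graph while tolerating a bounded number of missing edges. It is not SAPPER's algorithm and scales only to tiny graphs.

```python
from itertools import permutations

def approx_matches(query_nodes, query_edges, db_nodes, db_edges, max_missing=1):
    """Enumerate injective node mappings with at most `max_missing` unmatched query edges."""
    db_edge_set = {frozenset(e) for e in db_edges}
    hits = []
    for assignment in permutations(db_nodes, len(query_nodes)):
        phi = dict(zip(query_nodes, assignment))
        missing = sum(
            1 for u, v in query_edges
            if frozenset((phi[u], phi[v])) not in db_edge_set
        )
        if missing <= max_missing:
            hits.append((phi, missing))
    return hits

# Query: a triangle a-b-c. Database: a path 1-2-3-4 (so one query edge is "missing").
query_nodes, query_edges = ["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]
db_nodes, db_edges = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]
for phi, miss in approx_matches(query_nodes, query_edges, db_nodes, db_edges):
    print(phi, "missing edges:", miss)   # e.g. {'a': 1, 'b': 2, 'c': 3} missing edges: 1
```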

    Cloud-Enhanced Robotic System for Smart City Crowd Control

    Cloud robotics in smart cities is an emerging paradigm that enables autonomous robotic agents to communicate and collaborate with a cloud computing infrastructure. It complements the Internet of Things (IoT) by creating an expanded network in which robots offload data-intensive computation to the ubiquitous cloud to ensure quality of service (QoS). However, offloading for robots is significantly complex due to their unique characteristics of mobility, skill learning, data collection, and decision-making. In this paper, a generic cloud robotics framework is proposed to realize the smart city vision while taking its various complexities into consideration. Specifically, we present an integrated framework for a crowd control system in which cloud-enhanced robots are deployed to perform the necessary tasks. Task offloading is formulated as a constrained optimization problem capable of handling any task flow that can be characterized by a Directed Acyclic Graph (DAG). We consider two scenarios, minimizing energy and minimizing time, and develop a genetic algorithm (GA)-based approach to identify the optimal task offloading decisions. A performance comparison with two benchmarks shows that our GA scheme achieves the desired energy and time performance. We also show the adaptability of our algorithm by varying the bandwidth and movement values; the results illustrate the impact of these parameters on offloading decisions. Finally, we present a multi-task-flow optimal path sequence problem that highlights how a robot can plan its task completion via movements that expend the minimum energy, integrating path planning with offloading for robotics. To the best of our knowledge, this is the first attempt to evaluate cloud-based task offloading for a smart city crowd control system.
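    As a hedged sketch of the GA-based offloading idea, the snippet below encodes each task's local-or-cloud decision as one gene, charges a transfer cost on DAG edges whose endpoints run in different places, and evolves the population toward minimum energy. The task names, cost values, and GA parameters are illustrative assumptions, not the paper's formulation.

```python
import random

tasks = ["sense", "detect", "track", "plan", "act"]
dag_edges = [("sense", "detect"), ("detect", "track"), ("track", "plan"), ("plan", "act")]
local_e = {"sense": 1, "detect": 8, "track": 6, "plan": 4, "act": 1}   # on-robot energy
cloud_e = {"sense": 9, "detect": 2, "track": 2, "plan": 2, "act": 9}   # offloading energy
transfer_e = 1.5                                                       # per cut DAG edge

def energy(chromosome):          # chromosome[i] == 1 -> task i runs in the cloud
    placement = dict(zip(tasks, chromosome))
    compute = sum(cloud_e[t] if placement[t] else local_e[t] for t in tasks)
    comm = sum(transfer_e for u, v in dag_edges if placement[u] != placement[v])
    return compute + comm

def ga(pop_size=30, generations=60, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        survivors = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(tasks))        # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:
                i = random.randrange(len(tasks))
                child[i] ^= 1                            # flip one offloading decision
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy)

best = ga()
print(dict(zip(tasks, best)), "energy:", energy(best))
```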

    A Fast and Scalable Authentication Scheme in IoT for Smart Living

    Numerous resource-limited smart objects (SOs), such as sensors and actuators, have been widely deployed in smart environments, opening new attack surfaces to intruders. This severe security flaw discourages the adoption of the Internet of Things in smart living. In this paper, we leverage fog computing and microservices to push certificate authority (CA) functions to the proximity of data sources. In this way, we can minimize attack surfaces and authentication latency, resulting in a fast and scalable scheme for authenticating a large volume of resource-limited devices. We then design lightweight protocols to implement the scheme, in which both a high level of security and low computation workloads on the SO (no bilinear pairing requirement on the client side) are achieved. Evaluations demonstrate the efficiency and effectiveness of our scheme in handling authentication and registration for a large number of nodes, while protecting them against various threats to smart living. Finally, we showcase the success of moving computing intelligence towards data sources in handling complicated services. Comment: 15 pages, 7 figures, 3 tables, to appear in FGC
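    The paper's lightweight protocols are not reproduced here. As an illustrative stand-in only, the sketch below shows a fog-hosted CA endorsing a smart object's public key at registration and a challenge-response proof of key possession at authentication; it uses plain ECDSA via the `cryptography` package rather than the paper's scheme.

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Fog node acting as a local CA close to the data sources.
ca_key = ec.generate_private_key(ec.SECP256R1())

def register(so_public_key) -> bytes:
    """Fog CA endorses the SO's public key (a minimal 'certificate')."""
    pub_bytes = so_public_key.public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    return ca_key.sign(pub_bytes, ec.ECDSA(hashes.SHA256()))

def authenticate(so_public_key, endorsement: bytes, sign_challenge) -> bool:
    """Verify the CA endorsement, then challenge the SO to prove key possession."""
    pub_bytes = so_public_key.public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    ca_key.public_key().verify(endorsement, pub_bytes, ec.ECDSA(hashes.SHA256()))
    challenge = os.urandom(32)
    so_public_key.verify(sign_challenge(challenge), challenge, ec.ECDSA(hashes.SHA256()))
    return True   # both verify() calls raise InvalidSignature on failure

# A smart object registers once, then authenticates with a fresh challenge.
so_key = ec.generate_private_key(ec.SECP256R1())
cert = register(so_key.public_key())
ok = authenticate(so_key.public_key(), cert,
                  lambda c: so_key.sign(c, ec.ECDSA(hashes.SHA256())))
print("authenticated:", ok)
```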